
    Selection Lemmas for various geometric objects

    Selection lemmas are classical results in discrete geometry that have been well studied and have applications in many geometric problems, such as weak epsilon-nets and slimming Delaunay triangulations. Results of selection lemma type typically show that there exists a point contained in many of the objects induced (spanned) by an underlying point set. In the first selection lemma, we consider the set of all objects induced (spanned) by a point set $P$. This question has been widely explored for simplices in $\mathbb{R}^d$, with tight bounds known in $\mathbb{R}^2$. In our paper, we prove first selection lemmas for other classes of geometric objects. We also consider the strong variant of this problem, in which the piercing point is required to come from $P$. We prove exact results on the strong and the weak variants of the first selection lemma for axis-parallel rectangles, for special subclasses of axis-parallel rectangles such as quadrants and slabs, and for disks (over centrally symmetric point sets). We also show non-trivial bounds on the first selection lemma for axis-parallel boxes and hyperspheres in $\mathbb{R}^d$. In the second selection lemma, we consider an arbitrary subset of size $m$ of the set of all objects induced by $P$. We study this problem for axis-parallel rectangles and show that there exists a point in the plane contained in $\frac{m^3}{24n^4}$ of the rectangles. This improves the previous bound of Smorodinsky and Sharir when $m$ is almost quadratic.
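
    To make the induced-rectangle setup concrete, here is a minimal sketch (ours, not the paper's; plain Python with a made-up point set) that counts, for a candidate point, how many of the $\binom{n}{2}$ axis-parallel rectangles induced by pairs of points of $P$ contain it, and searches the coordinate grid for the best weak piercing point; the strong variant would restrict the candidates to $P$ itself.

        from itertools import combinations

        def induced_rect_depth(P, q):
            """Count axis-parallel rectangles induced by pairs of P containing q.

            Each pair {p1, p2} of P induces the rectangle having p1 and p2 as
            opposite corners (the axis-parallel bounding box of the pair).
            """
            qx, qy = q
            count = 0
            for (x1, y1), (x2, y2) in combinations(P, 2):
                if min(x1, x2) <= qx <= max(x1, x2) and min(y1, y2) <= qy <= max(y1, y2):
                    count += 1
            return count

        def best_weak_piercing_point(P):
            """Search candidates on the grid spanned by the coordinates of P.

            An optimal point may be assumed to lie on this grid, since the
            depth changes only when a rectangle's boundary is crossed, and
            those boundaries are determined by coordinates of points of P.
            """
            xs = {x for x, _ in P}
            ys = {y for _, y in P}
            return max(((x, y) for x in xs for y in ys),
                       key=lambda q: induced_rect_depth(P, q))

        P = [(0, 0), (1, 3), (2, 1), (3, 4), (4, 2)]  # made-up point set
        q = best_weak_piercing_point(P)
        print(q, induced_rect_depth(P, q), "of", len(P) * (len(P) - 1) // 2)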

    On the Structure of Learnability Beyond P/Poly

    Motivated by the goal of showing stronger structural results about the complexity of learning, we study the learnability of strong concept classes beyond P/poly, such as PSPACE/poly and EXP/poly. We show the following:
    1) (Unconditional Lower Bounds for Learning) Building on [Adam R. Klivans et al., 2013], we prove unconditionally that BPE/poly cannot be weakly learned in polynomial time over the uniform distribution, even with membership and equivalence queries.
    2) (Robustness of Learning) For the concept classes EXP/poly and PSPACE/poly, we show unconditionally that worst-case and average-case learning are equivalent, that PAC-learnability and learnability over the uniform distribution are equivalent, and that membership queries do not help in either case.
    3) (Reducing Succinct Search to Decision for Learning) For the decision problems R_{Kt} and R_{KS}, capturing the complexity of learning EXP/poly and PSPACE/poly respectively, we show a succinct search-to-decision reduction: each of these problems is in BPP iff there is a probabilistic polynomial-time algorithm computing circuits that encode proofs for positive instances of the problem. This is shown via a more general result giving succinct search-to-decision reductions for PSPACE, EXP and NEXP, which might be of independent interest.
    4) (Implausibility of Oblivious Strongly Black-Box Reductions Showing NP-Hardness of Learning NP/poly) We define a natural notion of hardness of learning with respect to oblivious strongly black-box reductions. We show that learning PSPACE/poly is PSPACE-hard with respect to oblivious strongly black-box reductions. On the other hand, if learning NP/poly is NP-hard with respect to oblivious strongly black-box reductions, the Polynomial Hierarchy collapses.

    Deterministically Counting Satisfying Assignments for Constant-Depth Circuits with Parity Gates, with Implications for Lower Bounds

    We give a deterministic algorithm for counting the number of satisfying assignments of any $AC^0[\oplus]$ circuit $C$ of size $s$ and depth $d$ over $n$ variables in time $2^{n-f(n,s,d)}$, where $f(n,s,d) = n/O(\log s)^{d-1}$, whenever $s = 2^{o(n^{1/d})}$. As a consequence, we get that for each $d$, there is a language in $E^{NP}$ that does not have $AC^0[\oplus]$ circuits of size $2^{o(n^{1/(d+1)})}$. This is the first lower bound in $E^{NP}$ against $AC^0[\oplus]$ circuits that beats the lower bound of $2^{\Omega(n^{1/2(d-1)})}$ due to Razborov and Smolensky for large $d$. Both our algorithm and our lower bounds extend to $AC^0[p]$ circuits for any prime $p$.
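
    As a back-of-the-envelope check of the stated consequence (our arithmetic, not part of the abstract), take $s = 2^{n^{1/(d+1)}}$, which satisfies the constraint $s = 2^{o(n^{1/d})}$. Then $\log s = n^{1/(d+1)}$, so the savings are

    $$f(n,s,d) = \frac{n}{O(\log s)^{d-1}} = \frac{n}{O\!\left(n^{(d-1)/(d+1)}\right)} = \Omega\!\left(n^{2/(d+1)}\right),$$

    and the counting algorithm runs in time $2^{n - \Omega(n^{2/(d+1)})}$. This nontrivial savings over exhaustive search is the regime that the algorithmic method for circuit lower bounds needs, which is how circuits of size $2^{o(n^{1/(d+1)})}$ come to be ruled out for $E^{NP}$.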

    Distribution-Free Proofs of Proximity

    Motivated by the fact that input distributions are often unknown in advance, distribution-free property testing considers a setting in which the algorithmic task is to accept functions $f : [n] \to \{0,1\}$ having a certain property $\Pi$ and reject functions that are $\epsilon$-far from $\Pi$, where the distance is measured according to an arbitrary and unknown input distribution $D$ over $[n]$. As usual in property testing, the tester is required to do so while making only a sublinear number of input queries, but as the distribution is unknown, we also allow a sublinear number of samples from the distribution $D$. In this work we initiate the study of distribution-free interactive proofs of proximity (df-IPPs), in which the distribution-free testing algorithm is assisted by an all-powerful but untrusted prover. Our main result is a df-IPP for any problem $\Pi \in NC$, with $\tilde{O}(\sqrt{n})$ communication, sample, query, and verification complexities, for any proximity parameter $\epsilon > 1/\sqrt{n}$. For such proximity parameters, this result matches the parameters of the best-known general-purpose IPPs in the standard uniform setting, and is optimal under reasonable cryptographic assumptions. For general values of the proximity parameter $\epsilon$, our distribution-free IPP has optimal query complexity $O(1/\epsilon)$, but its communication complexity is $\tilde{O}(\epsilon \cdot n + 1/\epsilon)$, which is worse than what is known for uniform IPPs when $\epsilon < 1/\sqrt{n}$. With the aim of closing this gap, we further show that for IPPs over specialised, but large, distribution families, such as sufficiently smooth distributions and product distributions, the communication complexity can be reduced to $\epsilon \cdot n \cdot (1/\epsilon)^{o(1)}$ (keeping the query complexity roughly the same as before), matching the communication complexity of the uniform case.
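
    A quick specialisation (ours, not from the abstract) shows why the general communication bound matches the main result exactly at the threshold: at $\epsilon = 1/\sqrt{n}$,

    $$\tilde{O}(\epsilon \cdot n + 1/\epsilon) = \tilde{O}\!\left(\frac{n}{\sqrt{n}} + \sqrt{n}\right) = \tilde{O}(\sqrt{n}),$$

    while for $\epsilon \ll 1/\sqrt{n}$ the $1/\epsilon$ term dominates and exceeds the uniform-case benchmark of $\epsilon \cdot n \cdot (1/\epsilon)^{o(1)}$, which is precisely the gap the specialised distribution families are introduced to close.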

    Optimally Deceiving a Learning Leader in Stackelberg Games

    Recent results have shown that algorithms for learning the optimal commitment in a Stackelberg game are susceptible to manipulation by the follower. These learning algorithms operate by querying the best responses of the follower, who consequently can deceive the algorithm by using fake best responses, typically by responding according to fake payoffs that are different from the actual ones. For this strategic behavior to be successful, the main challenge faced by the follower is to pinpoint the fake payoffs that would make the learning algorithm output a commitment that benefits them the most. While this problem has been considered before, the related literature has focused only on a simple setting in which the follower can choose from a finite set of payoff matrices, thus leaving the general version of the problem unanswered. In this paper, we fill this gap by showing that it is always possible for the follower to efficiently compute (near-)optimal fake payoffs, for various scenarios of learning interaction between the leader and the follower. Our results also establish an interesting connection between the follower's deception and the leader's maximin utility: through deception, the follower can induce almost any (fake) Stackelberg equilibrium if and only if the leader obtains at least their maximin utility in this equilibrium.
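
    The maximin utility referenced in the last sentence is computable by a small linear program. Below is an illustrative sketch (ours, not the paper's code; it assumes numpy/scipy are available, and the 2x2 payoff matrix is made up) that computes a leader's maximin value over mixed strategies.

        import numpy as np
        from scipy.optimize import linprog

        def leader_maximin(A):
            """Maximin value max_x min_j (x^T A)_j over mixed strategies x.

            A[i, j] is the leader's payoff when the leader plays row i and
            the follower responds with column j.  Variables: x (length m)
            and the scalar v; we maximize v, i.e. minimize -v.
            """
            m, k = A.shape
            c = np.zeros(m + 1)
            c[-1] = -1.0
            # v - (x^T A)_j <= 0 for every follower response j.
            A_ub = np.hstack([-A.T, np.ones((k, 1))])
            b_ub = np.zeros(k)
            # x must be a probability distribution.
            A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
            b_eq = np.array([1.0])
            bounds = [(0.0, 1.0)] * m + [(None, None)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=bounds)
            return res.x[:m], res.x[-1]

        # Hypothetical 2x2 leader payoff matrix.
        A = np.array([[3.0, 0.0],
                      [1.0, 2.0]])
        x, v = leader_maximin(A)
        print(x, v)  # mixed strategy (0.25, 0.75) with maximin utility 1.5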

    Improved learning of k-parities

    We consider the problem of learning $k$-parities in the online mistake-bound model: given a hidden vector $x \in \{0,1\}^n$ of Hamming weight $k$ and a sequence of "questions" $a_1, a_2, \ldots \in \{0,1\}^n$, where the algorithm must reply to each question with $\langle a_i, x \rangle \pmod 2$, what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. [3] by an $\exp(k)$ factor in the time complexity. Next, we consider the problem of learning $k$-parities in the PAC model in the presence of random classification noise of rate [Formula Presented]. Here, we observe that even in the presence of classification noise of non-trivial rate, it is possible to learn $k$-parities in time better than $\binom{n}{k/2}$, whereas the current best algorithm for learning noisy $k$-parities, due to Grigorescu et al. [9], inherently requires time $\binom{n}{k/2}$ even when the noise rate is polynomially small.
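
    For context, the classical baseline in the mistake-bound model (standard textbook material, not the paper's improved algorithm) learns any parity, $k$-sparse or not, with at most $n$ mistakes via Gaussian elimination over GF(2): answer each question from the linear span of the equations seen so far, and every mistake yields a linearly independent equation. A minimal sketch:

        import random

        class OnlineParityLearner:
            """Textbook mistake-bound learner for parities over GF(2).

            Keeps the labelled questions seen so far as a row-echelon basis
            mapping pivot bit -> (question mask, parity answer), with
            questions packed into Python ints.  Makes at most n mistakes
            on data consistent with some hidden parity.
            """

            def __init__(self, n):
                self.n = n
                self.rows = {}  # pivot position -> (mask, answer bit)

            def _reduce(self, a):
                """Reduce a against the basis; return (residue, implied answer)."""
                b = 0
                for pivot in sorted(self.rows, reverse=True):
                    if (a >> pivot) & 1:
                        mask, ans = self.rows[pivot]
                        a ^= mask
                        b ^= ans
                return a, b

            def predict(self, a):
                residue, b = self._reduce(a)
                # If a lies in the span of past questions the answer is
                # forced; otherwise we can only guess (here: 0).
                return b if residue == 0 else 0

            def update(self, a, true_answer):
                """Record the revealed answer; add a basis row if independent."""
                residue, b = self._reduce(a)
                if residue:
                    pivot = residue.bit_length() - 1
                    self.rows[pivot] = (residue, b ^ true_answer)

        # Demo on a random weight-k parity (at most n mistakes, here n = 16).
        n, k = 16, 3
        x = sum(1 << i for i in random.sample(range(n), k))
        learner, mistakes = OnlineParityLearner(n), 0
        for _ in range(200):
            a = random.getrandbits(n)
            truth = bin(a & x).count("1") & 1
            mistakes += learner.predict(a) != truth
            learner.update(a, truth)
        print("mistakes:", mistakes)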

    Optimally Deceiving a Learning Leader in Stackelberg Games

    Recent results in the ML community have revealed that learning algorithms used to compute the optimal strategy for the leader to commit to in a Stackelberg game are susceptible to manipulation by the follower. Such a learning algorithm operates by querying the best responses or the payoffs of the follower, who consequently can deceive the algorithm by responding as if their payoffs were very different from what they actually are. For this strategic behavior to be successful, the main challenge faced by the follower is to pinpoint the payoffs that would make the learning algorithm compute a commitment such that best responding to it maximizes the follower's utility according to the true payoffs. While this problem has been considered before, the related literature has focused only on the simplified scenario in which the payoff space is finite, thus leaving the general version of the problem unanswered. In this paper, we fill this gap by showing that it is always possible for the follower to efficiently compute (near-)optimal payoffs for various scenarios of learning interaction between the leader and the follower.

    Beyond natural proofs: hardness magnification and locality

    Hardness magnification reduces major complexity separations (such as $EXP \not\subseteq NC^1$) to proving lower bounds for some natural problem $Q$ against weak circuit models. Several recent works [11, 13, 14, 40, 42, 43, 46] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than $Q$, while $Q$ itself is susceptible to lower bounds, but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program:
    – Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [51]?
    – Can we adapt known lower-bound techniques to establish the desired lower bound for $Q$?
    We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem imply the non-existence of natural proofs. Since the non-existence of natural proofs implies the non-existence of efficient learning algorithms, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms. Hardness magnification might sidestep natural proofs, but we identify a source of difficulty in adapting existing lower-bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier: existing magnification theorems unconditionally show that the problems $Q$ considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower-bound techniques against weak circuit models quite often extend easily to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.